
    Perkinsus marinus tissue distribution and seasonal variation in oysters Crassostrea virginica from Florida, Virginia and New York

    Perkinsus marinus infection intensity was measured in eastern oysters Crassostrea virginica collected in October and December 1993, and March, May, and July 1994 from 3 U.S. sites: Apalachicola Bay (FL), Chesapeake Bay (VA), and Oyster Bay (NY). Gill, mantle, digestive gland, adductor muscle, hemolymph, and remaining tissue (including gonadal material and rectum) were dissected from 20 oysters from each site at each collection time. Samples were separately diagnosed for P. marinus infections by incubation in Ray's Fluid Thioglycollate Medium (RFTM) and subsequent microscopic quantification of purified enlarged hypnospores. At all sampling times and sites, average P. marinus infection intensity (per g wet wt tissue or per ml hemolymph) was lowest in hemolymph samples, and generally highest in the digestive gland. Perkinsus marinus prevalence was 100% at both the FL and NY sites for each of the 5 collection times, and, for the VA site, was less than 100% in only 1 month (May 1994). Seasonal intensity patterns and mean total body burdens differed among the sites. Average body burden was highest in VA during October and progressively declined to a minimum in May. This decline was probably due to mortality of heavily infected oysters and diminution of parasite activity associated with colder temperatures and reduced salinities. Intensities varied little during the months of October and December at both the FL and NY sites. Minimum average intensities were observed in March in FL oysters and May in NY oysters. Relatively high P. marinus infection levels that persisted throughout the winter in NY oysters compared with VA oysters could reflect the constant high salinity in Long Island Sound, which favors parasite activity, and also the rapid decline in temperature in the fall that may have prevented epizootic oyster mortalities.
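
    As a rough illustration of the intensity measure used above (parasite count normalized to tissue wet weight or hemolymph volume, then averaged by tissue type and summed into a whole-body burden), here is a minimal Python sketch; the record layout and numbers are hypothetical and are not data from this study.

        # Hypothetical sketch: normalize hypnospore counts to infection intensity
        # (count per g wet weight of tissue) and sum a per-oyster body burden.
        # Values and record layout are illustrative only, not data from the study.
        from collections import defaultdict

        samples = [
            # (oyster_id, tissue, hypnospore_count, wet_weight_g)
            (1, "gill",            1.2e4, 0.85),
            (1, "digestive_gland", 9.6e4, 1.10),
            (2, "gill",            3.1e3, 0.78),
            (2, "digestive_gland", 4.4e4, 1.05),
        ]

        intensity_by_tissue = defaultdict(list)
        burden_by_oyster = defaultdict(float)

        for oyster_id, tissue, count, weight_g in samples:
            intensity = count / weight_g          # hypnospores per g wet weight
            intensity_by_tissue[tissue].append(intensity)
            burden_by_oyster[oyster_id] += count  # total parasites per oyster

        for tissue, values in intensity_by_tissue.items():
            print(tissue, sum(values) / len(values))   # mean intensity per tissue
        print(dict(burden_by_oyster))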

    In vivo and ex vivo analyses of amyloid toxicity in the Tc1 mouse model of Down syndrome.

    RATIONALE: The prevalence of Alzheimer's disease is increased in people with Down syndrome. The pathology appears much earlier than in the general population, suggesting a predisposition to develop Alzheimer's disease. Down syndrome results from trisomy of human chromosome 21, leading to overexpression of possible Alzheimer's disease candidate genes, such as the amyloid precursor protein gene. To better understand how the Down syndrome context results in increased vulnerability to Alzheimer's disease, we analysed amyloid-β [25-35] peptide toxicity in the Tc1 mouse model of Down syndrome, in which ~75% of protein coding genes are functionally trisomic but, importantly, not amyloid precursor protein. RESULTS: Intracerebroventricular injection of oligomeric amyloid-β [25-35] peptide in three-month-old wildtype mice induced learning deficits, oxidative stress, synaptic marker alterations, activation of glycogen synthase kinase-3β, inhibition of protein kinase B (AKT), and apoptotic pathways as compared to scrambled peptide-treated wildtype mice. Scrambled peptide-treated Tc1 mice presented high levels of toxicity markers as compared to wildtype mice. Amyloid-β [25-35] peptide injection in Tc1 mice induced significant learning deficits and enhanced glycogen synthase kinase-3β activity in the cortex and expression of apoptotic markers in the hippocampus and cortex. Interestingly, several markers, including oxidative stress, synaptic markers, glycogen synthase kinase-3β activity in the hippocampus, and AKT activity in the hippocampus and cortex, were unaffected by amyloid-β [25-35] peptide injection in Tc1 mice. CONCLUSIONS: Tc1 mice present several toxicity markers similar to those observed in amyloid-β [25-35] peptide-treated wildtype mice, suggesting that developmental modifications in these mice modify their response to amyloid peptide. However, amyloid toxicity led to severe memory deficits in this Down syndrome mouse model.

    A Bragg glass phase in the vortex lattice of a type II superconductor

    Although crystals are usually quite stable, they are sensitive to a disordered environment: even an infinitesimal amount of impurities can lead to the destruction of the crystalline order. The resulting state of matter has been a longstanding puzzle. Until recently it was believed to be an amorphous state in which the crystal would break into crystallites. But a different theory predicts the existence of a novel phase of matter: the so-called Bragg glass, which is a glass and yet nearly as ordered as a perfect crystal. The lattice of vortices that carry magnetic flux in type II superconductors provides a good system in which to investigate these ideas. Here we show that neutron diffraction data of the vortex lattice in type II superconductors provide unambiguous evidence for a weak, power-law decay of the crystalline order characteristic of a Bragg glass. The theory also accurately predicts the electrical transport properties of superconductors; it naturally explains the observed phase transition and the dramatic jumps in the critical current associated with the melting of the Bragg glass. Moreover, the model explains experiments as diverse as X-ray scattering in disordered liquid crystals and conductivity of electronic crystals.
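
    For context, the power-law decay referred to here is usually summarized by the quasi-long-range behaviour of the translational correlation function. The schematic form and exponent estimate below follow the standard Bragg-glass theory and are not figures taken from this paper.

        % Quasi-long-range translational order in a Bragg glass (schematic form):
        % the correlation function decays as a power law rather than exponentially,
        % so Bragg peaks survive, but with power-law (not resolution-limited) tails.
        \begin{align}
          C_{K}(r) &= \bigl\langle e^{\,i\mathbf{K}\cdot[\mathbf{u}(\mathbf{r})-\mathbf{u}(0)]} \bigr\rangle
                      \sim \left(\frac{a}{r}\right)^{\eta_{K}}, \qquad \eta_{K} \simeq 1, \\
          S(\mathbf{q}) &\propto \frac{1}{\lvert \mathbf{q}-\mathbf{K} \rvert^{\,3-\eta_{K}}}
        \end{align}

    Here K is a reciprocal lattice vector of the vortex lattice, u the displacement field, and a the lattice spacing; the power-law tail of S(q) is what distinguishes the Bragg glass from both a true crystal and an amorphous state.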

    The challenges of extending climate risk insurance to fisheries

    This is the author accepted manuscript; the final version is available from Nature Research via the DOI in this record. Funders: Natural Environment Research Council (NERC); Centre for Environment, Fisheries and Aquaculture Science (Cefas); Willis Research Network; Commonwealth Marine Economies Programme, UK Foreign and Commonwealth Office.

    SOD1 Function and Its Implications for Amyotrophic Lateral Sclerosis Pathology: New and Renascent Themes

    The canonical role of superoxide dismutase 1 (SOD1) is as an antioxidant enzyme protecting the cell from reactive oxygen species toxicity. SOD1 was also the first gene in which mutations were found to be causative for the neurodegenerative disease amyotrophic lateral sclerosis (ALS), more than 20 years ago. ALS is a relentless and incurable mid-life onset disease, which starts with progressive paralysis and usually leads to death within 3 to 5 years of diagnosis; in the majority of cases, the intellect appears to remain intact while the motor system degenerates. It rapidly became clear that, when mutated, SOD1 takes on a toxic gain of function in ALS. However, this novel function remains unknown and many cellular systems have been implicated in the disease. Now it seems that SOD1 may play a rather larger role in the cell than originally realized, including as a key modulator of glucose signaling (at least so far in yeast) and in RNA binding. Here, we consider some of the new findings for SOD1 in health and disease, which may shed light on how single amino acid changes at sites throughout this protein can cause devastating neurodegeneration in the mammalian motor system.
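
    As background to the canonical antioxidant role described above, the dismutation catalysed by Cu,Zn-SOD1 is conventionally written as the copper-cycling half-reactions below; this is standard textbook chemistry, not a result of this article.

        % Copper-cycling mechanism of Cu,Zn-SOD1 (textbook form); the net reaction
        % converts two superoxide radicals into dioxygen and hydrogen peroxide.
        \begin{align}
          \mathrm{Cu^{2+}\text{-}SOD1 + O_2^{\bullet-}} &\longrightarrow \mathrm{Cu^{+}\text{-}SOD1 + O_2} \\
          \mathrm{Cu^{+}\text{-}SOD1 + O_2^{\bullet-} + 2\,H^{+}} &\longrightarrow \mathrm{Cu^{2+}\text{-}SOD1 + H_2O_2} \\
          \text{net:}\qquad \mathrm{2\,O_2^{\bullet-} + 2\,H^{+}} &\longrightarrow \mathrm{O_2 + H_2O_2}
        \end{align}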

    SAT-based Explicit LTL Reasoning

    We present here a new explicit reasoning framework for linear temporal logic (LTL), which is built on top of propositional satisfiability (SAT) solving. As a proof of concept of this framework, we describe a new LTL satisfiability tool, Aalta_v2.0, which is built on top of the MiniSAT SAT solver. We test the effectiveness of this approach by demonstrating that Aalta_v2.0 significantly outperforms all existing LTL satisfiability solvers. Furthermore, we show that the framework can be extended from propositional LTL to assertional LTL (where we allow theory atoms) by replacing MiniSAT with the Z3 SMT solver, and demonstrate that this can yield an exponential improvement in performance.
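
    The SAT layer such a framework builds on can be illustrated with a minimal sketch: enumerating the satisfying assignments of a propositional abstraction of an LTL formula, the kind of query an explicit reasoner delegates to its solver. This is only a hypothetical illustration of that building block, not the Aalta_v2.0 algorithm; Z3's Python bindings stand in here for MiniSAT.

        # Minimal illustration of the SAT building block an explicit LTL reasoner
        # relies on: enumerate satisfying assignments of a propositional abstraction.
        # This is NOT the Aalta_v2.0 algorithm, just the kind of query it delegates
        # to a SAT/SMT backend (Z3 is used here in place of MiniSAT for brevity).
        from z3 import Bools, Solver, And, Or, Not, sat, is_true

        # Propositional abstraction of the "current step" of some LTL formula,
        # e.g. obligations derived from  a U b  (hypothetical example).
        a, b = Bools("a b")
        current_step = Or(b, a)   # either b holds now, or a holds and the obligation persists

        solver = Solver()
        solver.add(current_step)

        assignments = []
        while solver.check() == sat:
            model = solver.model()
            values = {v: model.eval(v, model_completion=True) for v in (a, b)}
            assignments.append({str(v): is_true(val) for v, val in values.items()})
            # Block this assignment so the next check returns a different one.
            solver.add(Not(And([v == val for v, val in values.items()])))

        print(assignments)   # each assignment corresponds to one explicit successor choice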

    A Bayesian method for evaluating and discovering disease loci associations

    Background: A genome-wide association study (GWAS) typically involves examining representative SNPs in individuals from some population. A GWAS data set can involve a million SNPs and may soon involve billions. Researchers investigate the association of each SNP individually with a disease, and it is becoming increasingly commonplace to also analyze multi-SNP associations. Techniques for handling so many hypotheses include the Bonferroni correction and recently developed Bayesian methods. These methods can encounter problems. Most importantly, they are not applicable to a complex multi-locus hypothesis, which has several competing hypotheses rather than only a null hypothesis. A method that computes the posterior probability of complex hypotheses is a pressing need. Methodology/Findings: We introduce the Bayesian network posterior probability (BNPP) method, which addresses these difficulties. The method represents the relationship between a disease and SNPs using a directed acyclic graph (DAG) model, and computes the likelihood of such models using a Bayesian network scoring criterion. The posterior probability of a hypothesis is computed based on the likelihoods of all competing hypotheses. The BNPP can not only be used to evaluate a hypothesis that has previously been discovered or suspected, but also to discover new disease loci associations. The results of experiments using simulated and real data sets are presented. Our results concerning simulated data sets indicate that the BNPP exhibits both better evaluation and discovery performance than does a p-value based method. For the real data sets, previous findings in the literature are confirmed and additional findings are reported. Conclusions/Significance: We conclude that the BNPP resolves a pressing problem by providing a way to compute the posterior probability of complex multi-locus hypotheses. A researcher can use the BNPP to determine the expected utility of investigating a hypothesis further. Furthermore, we conclude that the BNPP is a promising method for discovering disease loci associations. © 2011 Jiang et al.
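
    The core computation described above, a posterior obtained by normalizing the likelihoods of all competing hypotheses, can be sketched as follows; the hypothesis names and scores are placeholders, and real Bayesian network scores would come from a scoring criterion applied to data rather than the numbers used here.

        # Hedged sketch of the posterior computation described above: normalize the
        # (marginal) likelihoods of all competing hypotheses, working in log space
        # for numerical stability. Hypothesis names and scores are placeholders,
        # not values produced by the BNPP method itself.
        import math

        # Hypothetical log marginal likelihoods (e.g. Bayesian-network scores) of
        # competing DAG models relating SNPs to the disease.
        log_likelihoods = {
            "no_association": -1052.7,
            "snp1_only":      -1049.3,
            "snp1_and_snp2":  -1047.9,
        }
        log_priors = {h: math.log(1.0 / len(log_likelihoods)) for h in log_likelihoods}

        # P(H | D) = P(D | H) P(H) / sum_j P(D | H_j) P(H_j), via log-sum-exp.
        joint = {h: log_likelihoods[h] + log_priors[h] for h in log_likelihoods}
        m = max(joint.values())
        log_evidence = m + math.log(sum(math.exp(v - m) for v in joint.values()))

        posteriors = {h: math.exp(v - log_evidence) for h, v in joint.items()}
        print(posteriors)   # posterior probability of each competing hypothesis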

    Verification of Decision Making Software in an Autonomous Vehicle: An Industrial Case Study

    Correctness of autonomous driving systems is crucial as incorrect behaviour may have catastrophic consequences. Many different hardware and software components (e.g. sensing, decision making, actuation, and control) interact to solve the autonomous driving task, leading to a level of complexity that brings new challenges for the formal verification community. Though formal verification has been used to prove correctness of software, there are significant challenges in transferring such techniques to an agile software development process and in ensuring widespread industrial adoption. In the light of these challenges, the identification of appropriate formalisms, and consequently the right verification tools, has significant impact on addressing them. In this paper, we evaluate the application of different formal techniques from supervisory control theory, model checking, and deductive verification to verify existing decision and control software (in development) for an autonomous vehicle. We discuss how the verification objective differs with respect to the choice of formalism and the level of formality that can be applied. Insights from the case study show a need for multiple formal methods to prove correctness, as well as the difficulty of capturing the right level of abstraction to model and specify the formal properties for the verification objectives.
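
    To make the model-checking strand above concrete, the sketch below checks a simple safety invariant on a toy decision-making state machine by explicit state-space exploration; the states, transitions, and property are invented for illustration and are unrelated to the actual vehicle software in the case study.

        # Toy explicit-state model check of a safety invariant, illustrating the
        # kind of property one might verify for decision-making software. The
        # model (states, transitions, invariant) is entirely hypothetical.
        from collections import deque

        def successors(state):
            """The toy controller is memoryless: it brakes iff an obstacle is
            sensed in the next step, so successors ignore the previous state."""
            for next_obstacle in (False, True):      # environment is unconstrained
                next_mode = "brake" if next_obstacle else "cruise"
                yield (next_mode, next_obstacle)

        def invariant(state):
            mode, obstacle = state
            # Safety property: never cruise while an obstacle is detected.
            return not (obstacle and mode == "cruise")

        def check(initial):
            """Breadth-first exploration of the reachable state space."""
            seen, queue = {initial}, deque([initial])
            while queue:
                state = queue.popleft()
                if not invariant(state):
                    return False, state              # counterexample state
                for nxt in successors(state):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
            return True, None

        print(check(("cruise", False)))              # (True, None): invariant holds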